12 research outputs found

    Mapping multivariate measures of brain response onto stimulus information during emotional face classification

    The relationship between feature processing and visual classification in the brain has been explored through a combination of reverse correlation methods (i.e., “Bubbles” [22]) and electrophysiological measurements (EEG) taken during a facial emotion categorization task [63]. However, in the absence of any specific model of the brain response measurements, this and other attempts [60] to parametrically relate stimulus properties to measurements of brain activation are difficult to interpret. In this thesis I consider a blind data-driven model of brain response. Statistically independent model parameters are found to minimize the expectation of an objective likelihood function over time [55], and a novel combination of methods is proposed for separating the signal from the noise. The model’s estimated signal parameters are then objectively rated by their ability to explain the subject’s performance during a facial emotion classification task, and also by their ability to explain the stimulus features, as revealed in a Bubbles experiment.
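
    The abstract does not specify the blind data-driven model, so the sketch below is a loose illustration only: it uses FastICA (as a stand-in for the independence criterion described above) on placeholder EEG epochs with hypothetical array shapes, then rates each recovered component by how well it predicts the subject's trial-by-trial classification accuracy.

        # Illustrative sketch, not the thesis pipeline: blind source separation of
        # EEG epochs with FastICA, then ranking components by their ability to
        # explain behavioral performance. Data and shapes are placeholders.
        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_times = 200, 32, 150
        eeg = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG epochs
        correct = rng.integers(0, 2, n_trials)                      # per-trial accuracy (0/1)

        # Unmix statistically independent components across channels.
        ica = FastICA(n_components=10, random_state=0)
        sources = ica.fit_transform(eeg.transpose(0, 2, 1).reshape(-1, n_channels))
        sources = sources.reshape(n_trials, n_times, -1)

        # Rate each component by how well its time course predicts behavior.
        for k in range(sources.shape[-1]):
            feats = sources[:, :, k]
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    feats, correct, cv=5).mean()
            print(f"component {k}: decoding accuracy = {score:.2f}")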

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Social robots are now part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet, even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory with well-known limitations such as lacking naturalistic dynamics. With no agreed methodology to objectively engineer a broader variance of more psychologically impactful facial expressions into the social robots' repertoire, human-robot interactions remain restricted. Here, we address this generic challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of our method applied to social robotics and highlight the benefits of using a data-driven approach that puts human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.

    Equipping Social Robots with Culturally-Sensitive Facial Expressions of Emotion Using Data-Driven Methods

    Social robots must be able to generate realistic and recognizable facial expressions to engage their human users. Many social robots are equipped with standardized facial expressions of emotion that are widely considered to be universally recognized across all cultures. However, mounting evidence shows that these facial expressions are not universally recognized - for example, they elicit significantly lower recognition accuracy in East Asian cultures than they do in Western cultures. Therefore, without culturally sensitive facial expressions, state-of-the-art social robots are restricted in their ability to engage a culturally diverse range of human users, which in turn limits their global marketability. To develop culturally sensitive facial expressions, novel data-driven methods are used to model the dynamic face movement patterns that convey basic emotions (e.g., happy, sad, anger) in a given culture using cultural perception. Here, we tested whether such dynamic facial expression models, derived in an East Asian culture and transferred to a popular social robot, improved the social signalling generation capabilities of the social robot with East Asian participants. Results showed that, compared to the social robot's existing set of 'universal' facial expressions, the culturally-sensitive facial expression models are recognized with generally higher accuracy and judged as more human-like by East Asian participants. We also detail the specific dynamic face movements (Action Units) that are associated with high recognition accuracy and judgments of human-likeness, including those that further boost performance. Our results therefore demonstrate the utility of using data-driven methods that employ human cultural perception to derive culturally-sensitive facial expressions that improve the social face signal generation capabilities of social robots. We anticipate that these methods will continue to inform the design of social robots and broaden their usability and global marketability.

    Grounding deep neural network predictions of human categorization behavior in understandable functional features: the case of face identity

    Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed
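
    As a rough illustration of the prediction step described above (not the paper's actual analysis; the layer names, dimensions, and data below are placeholders), one could fit a cross-validated ridge regression from each DNN layer's activations to the human similarity ratings:

        # Illustrative sketch: predict human similarity ratings of generated faces
        # from DNN activations, one cross-validated ridge model per layer.
        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        n_faces = 500
        activations = {                         # hypothetical per-layer activations
            "conv5": rng.standard_normal((n_faces, 2048)),
            "fc7": rng.standard_normal((n_faces, 4096)),
        }
        ratings = rng.uniform(1, 6, n_faces)    # placeholder similarity ratings

        for layer, X in activations.items():
            model = RidgeCV(alphas=np.logspace(-2, 4, 13))
            predicted = cross_val_predict(model, X, ratings, cv=10)
            r = np.corrcoef(predicted, ratings)[0, 1]
            print(f"{layer}: predicted-vs-observed correlation r = {r:.2f}")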

    Facial expressions elicit multiplexed perceptions of emotion categories and dimensions

    Human facial expressions are complex, multi-component signals that can communicate rich information about emotions [1-5], including specific categories, such as “anger,” and broader dimensions, such as “negative valence, high arousal” [6-8]. An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information—i.e., specific categories and broader dimensions—via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication [9]. We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver’s perceptions to model the specific facial signal components that represent emotion category and dimensional information to them [10-12]. First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions [13] plus 19 complex emotions [3]) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent—i.e., multiplex—categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results—based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms—show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
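
    As a toy illustration of comparing the components shared between a category model and a dimension model (the Action Unit sets below are hypothetical, not taken from the study), an overlap index such as Jaccard could be computed over each model's Action Units:

        # Illustrative sketch: quantify how much two facial signal models share
        # Action Units; high overlap is consistent with multiplexed signaling.
        def jaccard(a: set, b: set) -> float:
            """Proportion of Action Units shared by two facial signal models."""
            return len(a & b) / len(a | b)

        anger_aus = {4, 5, 7, 23}               # hypothetical "anger" category model
        neg_high_arousal_aus = {4, 5, 7, 25}    # hypothetical dimension model
        print(jaccard(anger_aus, neg_high_arousal_aus))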

    Modeling individual preferences reveals that face beauty is not universally perceived across cultures

    Facial attractiveness confers considerable advantages in social interactions [1,2], with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average [3] while optimizing sexual dimorphism [4]. However, emerging evidence questions this model as an accurate representation of facial attractiveness [5-7], including representing the diversity of beauty preferences within and across cultures [8-12]. Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.
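
    Under simplifying assumptions (randomly parameterized faces, a linear weighting of features; all data below are placeholders), the reverse-correlation step could be sketched as regressing each participant's attractiveness ratings onto the randomly varied face parameters:

        # Illustrative sketch of rating-based reverse correlation: the regression
        # weights approximate each participant's attractive-face feature model.
        import numpy as np

        rng = np.random.default_rng(2)
        n_participants, n_trials, n_face_params = 40, 800, 50

        models = []
        for p in range(n_participants):
            params = rng.standard_normal((n_trials, n_face_params))  # random face features
            ratings = rng.uniform(1, 9, n_trials)                    # placeholder ratings
            # Least-squares weights: how each face parameter drives attractiveness.
            beta, *_ = np.linalg.lstsq(params, ratings - ratings.mean(), rcond=None)
            models.append(beta)

        models = np.asarray(models)         # participants x face parameters
        group_model = models.mean(axis=0)   # group-level attractive-face model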

    Cultural facial expressions dynamically convey emotion category and intensity information

    Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender’s behavior [1-3]. For example, attack often follows signals of intense aggression if receivers fail to retreat [4,5]. Humans regularly use facial expressions to communicate such information [6-11]. Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions—“happy,” “surprise,” “fear,” “disgust,” “anger,” and “sad”—and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as “anger,” “disgust,” and “fear,” but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.

    Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of “biologically basic to socially specific” information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four
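
    As an illustrative sketch only (the study combines perceptual expectation modeling, information theory, and Bayesian classifiers; the data layout, labels, and category split below are assumed), one could train a simple Bayesian classifier on Action Unit patterns within successive time windows and compare how well broad versus specific categories are recovered as the expression unfolds:

        # Illustrative sketch: classify broad (e.g., approach/avoidance) and specific
        # (six classic emotions) categories from Action Unit activations per window.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_trials, n_aus, n_windows = 600, 42, 5
        au_course = rng.standard_normal((n_trials, n_windows, n_aus))  # placeholder AU amplitudes
        emotion = rng.integers(0, 6, n_trials)                         # six classic emotions
        broad = (emotion < 3).astype(int)                              # hypothetical broad split

        for t in range(n_windows):
            X = au_course[:, t, :]
            acc_broad = cross_val_score(GaussianNB(), X, broad, cv=5).mean()
            acc_six = cross_val_score(GaussianNB(), X, emotion, cv=5).mean()
            print(f"window {t}: broad = {acc_broad:.2f}, six emotions = {acc_six:.2f}")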

    Dynamic mental models of culture-specific emotions

    According to the Universality Hypothesis, facial expressions of emotion comprise a universal set of six basic signals common to all humans (i.e., happy, surprise, fear, disgust, anger, sad; Ekman et al., 1969). In contrast, Jack et al. (2012) demonstrate that although Western Caucasians (WC) represent the six basic emotions with the same dynamic facial movements, East Asians (EA) do not. This raises two questions: (1) what are the basic emotions in the EA culture, and (2) what are the corresponding culture-specific facial signals that transmit them?

    To address the first question, we clustered emotion words in WC (English, 50 observers) and EA (Chinese, 50 observers) cultures. Each participant rated on a bipolar scale the pairwise similarity of selected emotion words in their own language (see Methods). We applied clustering analyses to these data. The English clusters comprised 8 emotion categories including the six basic ones, plus pride and shame (e.g., Tracy & Robins, 2004). In contrast, the Chinese clusters showed a more complex structure of 10 basic emotions (see Figure S1).

    Using the resulting basic emotion categories as response labels, we applied 4-dimensional reverse correlation (Yu et al., 2012) to reconstruct culture-specific face signals that transmit each emotion. As in Jack et al. (2012), on each experimental trial the observer viewed a random facial animation generated by a computer graphics platform. Observers interpreted the facial animation as expressive of a given emotion when the facial movements corresponded with their mental representation of that emotion (Figure S2). Reverse correlation analyses produced, for each observer, a dynamic model per basic emotion.

    Our analyses revealed culture-specific facial expression signals, refuting the Universality Hypothesis. For the first time, we also derive the cultural face signals that articulate emotion communication in the EA culture (see Figure S3 for examples).
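
    The word-clustering step described above could be sketched, under placeholder assumptions (the words, similarity ratings, and cluster count below are illustrative, not the study's data), as hierarchical clustering of an averaged pairwise similarity matrix:

        # Illustrative sketch: cluster emotion words from averaged pairwise
        # similarity ratings by cutting a hierarchical clustering tree.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        words = ["happy", "joy", "surprise", "fear", "anxiety", "disgust",
                 "anger", "rage", "sad", "grief", "pride", "shame"]
        rng = np.random.default_rng(4)
        n = len(words)

        sim = rng.uniform(0, 1, (n, n))              # placeholder mean similarity (0-1)
        sim = (sim + sim.T) / 2
        np.fill_diagonal(sim, 1.0)

        dist = squareform(1.0 - sim, checks=False)   # condensed distance vector
        tree = linkage(dist, method="average")
        labels = fcluster(tree, t=8, criterion="maxclust")  # cut into 8 clusters
        for word, cluster in zip(words, labels):
            print(f"{word}: cluster {cluster}")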

    Facial movements strategically camouflage involuntary social signals of face morphology

    Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities